1,307 research outputs found
Using Barriers to Reduce the Sensitivity to Edge Miscalculations of Casting-Based Object Projection Feature Estimation
3D motion tracking is a critical task in many computer vision applications.
Unsupervised markerless 3D motion tracking systems determine the most relevant
object on the screen and then track it by continuously estimating its
projection features (center and area) from the edge image and a point inside
the relevant object projection (namely, inner point), until the tracking fails.
Existing reliable object projection feature estimation techniques are based on
ray-casting or grid-filling from the inner point. These techniques assume the
edge image to be accurate. However, in real case scenarios, edge
miscalculations may arise from low contrast between the target object and its
surroundings or motion blur caused by low frame rates or fast-moving target
objects. In this paper, we propose a barrier extension to casting-based
techniques that mitigates the effect of edge miscalculations.
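The casting-based estimation described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation; it assumes the barrier acts as a maximum ray length and that the features are derived from the ray endpoints:

```python
# Hypothetical sketch of casting-based projection feature estimation
# with a barrier, assuming a binary edge image (lists of 0/1 rows).
import math

def cast_rays(edges, inner, n_rays=8, barrier=None):
    """Cast n_rays from the inner point; each ray stops at an edge pixel,
    at the image border, or (with a barrier) after `barrier` pixels,
    which limits the damage done by gaps in a miscalculated edge image."""
    h, w = len(edges), len(edges[0])
    cx, cy = inner
    endpoints = []
    for k in range(n_rays):
        angle = 2 * math.pi * k / n_rays
        dx, dy = math.cos(angle), math.sin(angle)
        x, y, dist = float(cx), float(cy), 0
        while True:
            x, y, dist = x + dx, y + dy, dist + 1
            ix, iy = int(round(x)), int(round(y))
            out = not (0 <= ix < w and 0 <= iy < h)
            hit = (not out) and edges[iy][ix]
            if out or hit or (barrier is not None and dist >= barrier):
                endpoints.append((ix, iy))
                break
    return endpoints

def features(endpoints):
    """Estimate the projection center (endpoint centroid) and area
    (shoelace formula over the polygon of ray endpoints)."""
    n = len(endpoints)
    cx = sum(p[0] for p in endpoints) / n
    cy = sum(p[1] for p in endpoints) / n
    area = abs(sum(endpoints[i][0] * endpoints[(i + 1) % n][1]
                   - endpoints[(i + 1) % n][0] * endpoints[i][1]
                   for i in range(n))) / 2
    return (cx, cy), area
```

With the barrier set, a ray that slips through a gap in the edges is cut off at a bounded distance instead of distorting the estimated features.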
Filling-Based Techniques Applied to Object Projection Feature Estimation
3D motion tracking is a critical task in many computer vision applications.
Unsupervised markerless 3D motion tracking systems determine the most relevant
object on the screen and then track it by continuously estimating its
projection features (center and area) from the edge image and a point inside
the relevant object projection (namely, inner point), until the tracking fails.
Existing object projection feature estimation techniques are based on
ray-casting from the inner point. These techniques present three main
drawbacks: when the inner point is surrounded by edges, rays may not reach
other relevant areas; as a consequence of that issue, the estimated features
may greatly vary depending on the position of the inner point relative to the
object projection; and finally, increasing the number of rays being cast and
the ray-casting iterations (which would make the results more accurate and
stable) increases the processing time to the point the tracking cannot be
performed on the fly. In this paper, we analyze an intuitive filling-based
object projection feature estimation technique that solves the aforementioned
problems but is too sensitive to edge miscalculations. Then, we propose a less
computationally intensive modification to that technique that would not be
affected by the issues of existing techniques and would be no more sensitive to
edge miscalculations than ray-casting-based techniques.
A DSL for Mapping Abstract Syntax Models to Concrete Syntax Models in ModelCC
ModelCC is a model-based parser generator that decouples language design from
language processing. ModelCC provides two different mechanisms to specify the
mapping from an abstract syntax model to a concrete syntax model: metadata
annotations defined on top of the abstract syntax model specification and a
domain-specific language for defining ASM-CSM mappings. Using a domain-specific
language to specify the mapping from abstract to concrete syntax models allows
the definition of multiple concrete syntax models for the same abstract syntax
model. In this paper, we describe the ModelCC domain-specific language for
abstract syntax model to concrete syntax model mappings and we showcase its
capabilities by providing a meta-definition of that domain-specific language.
A Model-Driven Probabilistic Parser Generator
Existing probabilistic scanners and parsers impose hard constraints on the
way lexical and syntactic ambiguities can be resolved. Furthermore, traditional
grammar-based parsing tools are limited in the mechanisms they allow for taking
context into account. In this paper, we propose a model-driven tool that allows
for statistical language models with arbitrary probability estimators. Our work
on model-driven probabilistic parsing is built on top of ModelCC, a model-based
parser generator, and enables the probabilistic interpretation and resolution
of anaphoric, cataphoric, and recursive references in the disambiguation of
abstract syntax graphs. To demonstrate the expressive power of ModelCC, we
describe the design of a general-purpose natural language parser.
A Lexical Analysis Tool with Ambiguity Support
Lexical ambiguities naturally arise in languages. We present Lamb, a lexical
analyzer that produces a lexical analysis graph describing all the possible
sequences of tokens that can be found within the input string. Parsers can
process such lexical analysis graphs and discard any sequence of tokens that
does not produce a valid syntactic sentence, therefore performing, together
with Lamb, a context-sensitive lexical analysis in lexically-ambiguous language
specifications.
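The idea of keeping every lexical interpretation can be illustrated with a toy enumerator (a list of alternative token sequences rather than Lamb's shared graph representation, with token definitions reduced to literal lexemes for brevity):

```python
# Toy illustration of ambiguity-preserving tokenization: instead of
# committing to one token stream, enumerate every sequence of tokens
# that covers the input; a parser can later discard invalid sequences.

def tokenizations(text, tokens):
    """Return all sequences of (name, lexeme) pairs covering `text`,
    where `tokens` is a list of (name, literal lexeme) definitions."""
    if not text:
        return [[]]
    results = []
    for name, lexeme in tokens:
        if text.startswith(lexeme):
            for rest in tokenizations(text[len(lexeme):], tokens):
                results.append([(name, lexeme)] + rest)
    return results

# "ab" is lexically ambiguous: one WORD token, or an A followed by a B
tokens = [("WORD", "ab"), ("A", "a"), ("B", "b")]
```

A real scanner such as Lamb shares common sub-sequences in a graph instead of materializing the (potentially exponential) list of alternatives.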
A Model-Driven Parser Generator, from Abstract Syntax Trees to Abstract Syntax Graphs
Model-based parser generators decouple language specification from language
processing. The model-driven approach avoids the limitations that conventional
parser generators impose on the language designer. Conventional tools require
the designed language grammar to conform to the specific kind of grammar
supported by the particular parser generator (LL and LR parser generators
being the most common). Model-driven parser generators, like ModelCC, do not require
a grammar specification, since that grammar can be automatically derived from
the language model and, if needed, adapted to conform to the requirements of
the given kind of parser, all of this without interfering with the conceptual
design of the language and its associated applications. Moreover, model-driven
tools such as ModelCC are able to automatically resolve references between
language elements, hence producing abstract syntax graphs instead of abstract
syntax trees as the result of the parsing process. Such graphs are not confined
to directed acyclic graphs and they can contain cycles, since ModelCC supports
anaphoric, cataphoric, and recursive references.
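The reference-resolution step can be illustrated with a toy sketch (hypothetical names, not the ModelCC API): symbolic references in parsed elements are replaced by direct object links, so a forward and a backward reference together yield a cyclic graph rather than a tree:

```python
# Toy illustration of reference resolution turning parsed elements
# into an abstract syntax graph that may contain cycles.

class Node:
    """A parsed language element with an optional symbolic reference."""
    def __init__(self, name, ref_name=None):
        self.name, self.ref_name, self.ref = name, ref_name, None

def resolve(nodes):
    """Replace each symbolic reference with a direct link to the
    referenced object, whether it appears before (anaphoric) or
    after (cataphoric) the referencing element."""
    table = {n.name: n for n in nodes}
    for n in nodes:
        if n.ref_name is not None:
            n.ref = table[n.ref_name]
    return nodes

# a references b (cataphoric) and b references a (anaphoric):
# after resolution the structure is cyclic, not a tree or a DAG
a, b = Node("a", "b"), Node("b", "a")
resolve([a, b])
```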
The ModelCC Model-Based Parser Generator
Formal languages let us define the textual representation of data with
precision. Formal grammars, typically in the form of BNF-like productions,
describe the language syntax, which is then annotated for syntax-directed
translation and completed with semantic actions. When, apart from the textual
representation of data, an explicit representation of the corresponding data
structure is required, the language designer has to devise the mapping between
the suitable data model and its proper language specification, and then develop
the conversion procedure from the parse tree to the data model instance.
Unfortunately, whenever the format of the textual representation has to be
modified, changes have to be propagated throughout the entire language processor
tool chain. These updates are time-consuming, tedious, and error-prone.
Besides, in case different applications use the same language, several copies
of the same language specification have to be maintained. In this paper, we
introduce ModelCC, a model-based parser generator that decouples language
specification from language processing, hence avoiding many of the problems
caused by grammar-driven parsers and parser generators. ModelCC incorporates
reference resolution within the parsing process. Therefore, instead of
returning mere abstract syntax trees, ModelCC is able to obtain abstract syntax
graphs from input strings.
Treating Insomnia, Amnesia, and Acalculia in Regular Expression Matching
Regular expressions provide a flexible means for matching strings and they
are often used in data-intensive applications. They are formally equivalent to
either deterministic finite automata (DFAs) or nondeterministic finite automata
(NFAs). Both DFAs and NFAs are affected by two problems known as amnesia and
acalculia, and DFAs are also affected by a problem known as insomnia. Existing
techniques require an automata conversion and compaction step that prevents the
use of existing automaton databases and hinders the maintenance of the
resulting compact automata. In this paper, we propose Parallel Finite State
Machines (PFSMs), which are able to run any DFA- or NFA-like state machines
without a previous conversion or compaction step. PFSMs report, online, all the
matches found within an input string and they solve the three aforementioned
problems. Parallel Finite State Machines require quadratic time and linear
memory and they are distributable. Parallel Finite State Machines make very
fast distributed regular expression matching in data-intensive applications
feasible.
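The parallel simulation idea can be sketched as follows (a simplified illustration, not the PFSM algorithm itself): candidate match starts and automaton states advance together in a single pass over the input, and each match is reported as soon as an accepting state is reached:

```python
# Simplified parallel state-machine simulation: run an NFA-like
# transition table over the input once, tracking every (start, state)
# pair, and report all matches online.

def run_parallel(nfa, accepting, text):
    """`nfa` maps (state, symbol) -> set of next states; state 0 is
    the start state. Returns all (start, end) match spans, reported
    as soon as they are completed during the single scan."""
    matches = []
    active = set()  # pairs of (match start position, current state)
    for i, ch in enumerate(text):
        active.add((i, 0))          # a new match may start at any position
        nxt = set()
        for start, state in active:
            for s in nfa.get((state, ch), ()):
                nxt.add((start, s))
                if s in accepting:
                    matches.append((start, i + 1))  # reported online
        active = nxt
    return matches
```

Because the transition table is used as-is, any DFA- or NFA-like machine can be plugged in without a prior conversion or compaction step, which is the property the abstract highlights.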
A Tool for Model-Based Language Specification
Formal languages let us define the textual representation of data with
precision. Formal grammars, typically in the form of BNF-like productions,
describe the language syntax, which is then annotated for syntax-directed
translation and completed with semantic actions. When, apart from the textual
representation of data, an explicit representation of the corresponding data
structure is required, the language designer has to devise the mapping between
the suitable data model and its proper language specification, and then develop
the conversion procedure from the parse tree to the data model instance.
Unfortunately, whenever the format of the textual representation has to be
modified, changes have to be propagated throughout the entire language processor
tool chain. These updates are time-consuming, tedious, and error-prone.
Besides, in case different applications use the same language, several copies
of the same language specification have to be maintained. In this paper, we
introduce a model-based parser generator that decouples language specification
from language processing, hence avoiding many of the problems caused by
grammar-driven parsers and parser generators.
Scanning and Parsing Languages with Ambiguities and Constraints: The Lamb and Fence Algorithms
Traditional language processing tools constrain language designers to
specific kinds of grammars. In contrast, model-based language processing tools
decouple language design from language processing. These tools allow the
occurrence of lexical and syntactic ambiguities in language specifications and
the declarative specification of constraints for resolving them. As a result,
these techniques require scanners and parsers able to parse context-free
grammars, handle ambiguities, and enforce constraints for disambiguation. In
this paper, we present Lamb and Fence. Lamb is a scanning algorithm that
supports ambiguous token definitions and the specification of custom pattern
matchers and constraints. Fence is a chart parsing algorithm that supports
ambiguous context-free grammars and the definition of constraints on
associativity, composition, and precedence, as well as custom constraints. Lamb
and Fence, in conjunction, enable the implementation of the ModelCC model-based
language processing tool.